The Sensory Consequences of Speaking: Parametric Neural Cancellation during Speech in Auditory Cortex
Authors
Abstract
When we speak, we provide ourselves with auditory speech input. Efficient monitoring of speech is often hypothesized to depend on matching the predicted sensory consequences of internal motor commands (the forward model) with actual sensory feedback. In this paper, we tested the forward model hypothesis using functional magnetic resonance imaging. We administered an overt picture-naming task in which we parametrically reduced the quality of verbal feedback by noise masking. Presentation of the same auditory input in the absence of overt speech served as a listening control condition. Our results suggest that a match between predicted and actual sensory feedback results in inhibition or cancellation of auditory activity, because speaking with normal, unmasked feedback reduced activity in the auditory cortex compared to the listening control condition. Moreover, during self-generated speech, activation in auditory cortex increased as the feedback quality of the self-generated speech decreased. We conclude that, during speaking, early auditory cortex is involved in matching external signals with an internally generated model or prediction of sensory consequences, the locus of which may reside in auditory or higher-order brain areas. Matching at early auditory cortex may provide a very sensitive monitoring mechanism that highlights speech production errors at very early levels of processing and may efficiently determine the self-agency of speech input.
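The forward-model account described in the abstract can be summarized as a subtraction: an efference-copy prediction of the expected feedback is cancelled against the actual feedback, so the residual (and hence the auditory-cortex response) grows as noise masking degrades the match. The sketch below is a minimal, hypothetical illustration of that qualitative relationship; the waveform, noise levels, and residual measure are assumptions for illustration only, not the authors' model or analysis.

```python
# Toy illustration of forward-model cancellation (not the authors' analysis).
# The "speech" waveform, noise-masking levels, and residual measure are
# illustrative assumptions showing the qualitative prediction that residual
# auditory activity grows as feedback quality drops.

import numpy as np

rng = np.random.default_rng(0)

# Internally predicted sensory consequence of the motor command:
# here, simply a clean "speech" waveform (a sum of sinusoids).
t = np.linspace(0.0, 1.0, 16_000)
predicted = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)

def auditory_residual(noise_level: float) -> float:
    """Mean absolute prediction error after cancelling the predicted feedback.

    noise_level parametrically degrades the actual feedback, mimicking the
    noise-masking manipulation in the task.
    """
    actual = predicted + noise_level * rng.standard_normal(t.size)
    prediction_error = actual - predicted  # forward-model cancellation
    return float(np.mean(np.abs(prediction_error)))

# Unmasked feedback (perfect match) is largely cancelled; heavier masking
# leaves a larger residual, i.e. more "auditory activity" to explain.
for level in (0.0, 0.25, 0.5, 1.0):
    print(f"noise level {level:.2f} -> residual {auditory_residual(level):.3f}")
```

Under these assumptions, the residual is zero for unmasked feedback and increases monotonically with the masking level, mirroring the parametric increase in auditory-cortex activation reported above.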
Related Articles
The auditory representation of speech sounds in human motor cortex
In humans, listening to speech evokes neural responses in the motor cortex. This has been controversially interpreted as evidence that speech sounds are processed as articulatory gestures. However, it is unclear what information is actually encoded by such neural activity. We used high-density direct human cortical recordings while participants spoke and listened to speech sounds. Motor cortex ...
Sensory-motor integration during speech production localizes to both left and right plana temporale.
Speech production relies on fine voluntary motor control of respiration, phonation, and articulation. The cortical initiation of complex sequences of coordinated movements is thought to result in parallel outputs, one directed toward motor neurons while the "efference copy" projects to auditory and somatosensory fields. It is proposed that the latter encodes the expected sensory consequences of...
Cohesion and Joint Speech: Right Hemisphere Contributions to Synchronized Vocal Production.
Synchronized behavior (chanting, singing, praying, dancing) is found in all human cultures and is central to religious, military, and political activities, which require people to act collaboratively and cohesively; however, we know little about the neural underpinnings of many kinds of synchronous behavior (e.g., vocal behavior) or its role in establishing and maintaining group cohe...
A comparison of sensory-motor activity during speech in first and second languages.
A foreign language (L2) learned after childhood results in an accent. This functional neuroimaging study investigated speech in L2 as a sensory-motor skill. The hypothesis was that there would be an altered response in auditory and somatosensory association cortex, specifically the planum temporale and parietal operculum, respectively, when speaking in L2 relative to L1, independent of rate of ...
Neural correlates of verbal feedback processing: an fMRI study employing overt speech.
Speakers use external auditory feedback to monitor their own speech. Feedback distortion has been found to increase activity in the superior temporal areas. Using fMRI, the present study investigates the neural correlates of processing verbal feedback without distortion. In a blocked design, the following conditions were presented: (1) overt picture-naming, (2) overt picture-naming while pink n...